Practice: confusion matrix terminology
Confusion matrix questions
Imagine a spam filter model where emails labeled 1 = spam, 0 = not spam. If a spam email is incorrectly classified as not spam, what kind of error is this?
- A false positive
- A true positive
- A false negative
- A true negative
In an intrusion detection system, 1 = intrusion, 0 = safe. If the system misses an actual intrusion and classifies it as safe, this is a:
- A false positive
- A true positive
- A false negative
- A true negative
In a medical test for a disease, 1 = diseased, 0 = healthy. If a healthy patient is incorrectly diagnosed as diseased, that’s a:
- A false positive
- A true positive
- A false negative
- A true negative
Now that we understand the different types of errors, we can explore metrics that better capture model performance when accuracy falls short.
Precision, Recall, F1-Score
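As a quick sketch of how these metrics follow from the error counts above (the toy labels below are made up for illustration, not taken from the lecture):

```python
# Toy labels for a binary classifier (1 = positive class); values are illustrative.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

scikit-learn's `precision_score`, `recall_score`, and `f1_score` compute these same quantities directly from `y_true` and `y_pred`.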
iClicker Exercise 9.1
Select all of the following statements which are TRUE.
- In medical diagnosis, false positives are more damaging than false negatives (assume “positive” means the person has a disease, “negative” means they don’t).
- In spam classification, false positives are more damaging than false negatives (assume “positive” means the email is spam, “negative” means it’s not).
- If method A gets a higher accuracy than method B, that means its precision is also higher.
- If method A gets a higher accuracy than method B, that means its recall is also higher.
Counter examples
Method A - higher accuracy but lower precision
Method B - lower accuracy but higher precision
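The counterexample can be made concrete with toy confusion-matrix counts (these numbers are illustrative, chosen only to exhibit the effect):

```python
def accuracy(tp, fp, tn, fn):
    # Fraction of all predictions that were correct.
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    # Fraction of predicted positives that were actually positive.
    return tp / (tp + fp)

# Method A: many true positives, but also many false positives.
acc_a = accuracy(tp=90, fp=30, tn=70, fn=10)   # 160/200 = 0.800
prec_a = precision(tp=90, fp=30)               # 90/120 = 0.750

# Method B: very few false positives, but misses many positives.
acc_b = accuracy(tp=50, fp=5, tn=95, fn=50)    # 145/200 = 0.725
prec_b = precision(tp=50, fp=5)                # 50/55 ≈ 0.909
```

Method A wins on accuracy yet loses on precision, so neither metric implies the other.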
Thresholding
The metrics above assume a fixed decision threshold.
Thresholding converts a predicted score into a binary prediction.
A typical threshold is 0.5.
- A prediction of 0.90 \(\rightarrow\) a high likelihood that the transaction is fraudulent, so we predict fraud
- A prediction of 0.20 \(\rightarrow\) a low likelihood that the transaction is fraudulent, so we predict non-fraud
What happens if the predicted score is equal to the chosen threshold?
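One common convention (an assumption here, not a universal rule) is to predict positive when the score is greater than or equal to the threshold, so a score exactly at the threshold lands in the positive class; some implementations use a strict `>` instead:

```python
def predict_label(score, threshold=0.5):
    """Map a predicted score to a binary label (1 = positive).

    Uses the >= convention, so score == threshold predicts positive.
    """
    return 1 if score >= threshold else 0

fraud = predict_label(0.90)      # -> 1, predict fraud
non_fraud = predict_label(0.20)  # -> 0, predict non-fraud
boundary = predict_label(0.50)   # -> 1 under the >= convention
```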
Play with classification thresholds
iClicker Exercise 9.2
Select all of the following statements which are TRUE.
- If we increase the classification threshold, both true and false positives are likely to decrease.
- If we increase the classification threshold, both true and false negatives are likely to decrease.
- Lowering the classification threshold generally increases the model’s recall.
- Raising the classification threshold can improve the precision of the model if it effectively reduces the number of false positives without significantly affecting true positives.
PR curve
- Calculate precision and recall (TPR) at every possible threshold and plot precision against recall.
- A better choice for highly imbalanced datasets.
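A minimal sketch of tracing out the curve by sweeping the threshold over every distinct score (the labels and scores below are made up for illustration; in practice scikit-learn's `precision_recall_curve` does this):

```python
# Illustrative true labels and predicted scores for six examples.
y_true = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.6, 0.9]

points = []  # (precision, recall) at each threshold, highest threshold first
for thr in sorted(set(scores), reverse=True):
    pred = [1 if s >= thr else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, y_true))
    points.append((tp / (tp + fp), tp / (tp + fn)))
```

Plotting `points` (recall on the x-axis, precision on the y-axis) gives the PR curve; lowering the threshold moves along it toward higher recall.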
ROC curve
- Calculate the true positive rate (TPR) and false positive rate (FPR) at every possible threshold and plot TPR against FPR.
- A good choice when the classes are roughly balanced.
AUC
- The area under the ROC curve (AUC) represents the probability that the model, if given a randomly chosen positive and negative example, will rank the positive higher than the negative.
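This ranking interpretation can be checked directly on a tiny example (labels and scores below are illustrative): count the fraction of (positive, negative) pairs in which the positive example gets the higher score, with ties counting half.

```python
# Illustrative labels and scores (same format as scikit-learn's roc_auc_score).
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

pos = [s for s, t in zip(scores, y_true) if t == 1]  # scores of positives
neg = [s for s, t in zip(scores, y_true) if t == 0]  # scores of negatives

# AUC = P(random positive is ranked above random negative), ties count 0.5.
pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)
```

Here 3 of the 4 positive/negative pairs are ranked correctly, giving an AUC of 0.75, which matches what `sklearn.metrics.roc_auc_score` returns for these inputs.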
ROC AUC questions
Consider the points A, B, and C in the following diagram, each representing a threshold. Which threshold would you pick in each scenario?
- If false positives (false alarms) are highly costly
- If false positives are cheap and false negatives (missed true positives) highly costly
- If the costs are roughly equivalent